In the field of deep learning-based medical image segmentation, TransUNet, which combines the merits of Transformers and U-Net, is one of the current state-of-the-art segmentation models. However, it does not consider the local connections between adjacent blocks in its encoder, and channel information does not interact during the upsampling process of its decoder. To address these problems, a Multi-attention FUsion Network (MFUNet) model was proposed. Firstly, a Feature Fusion Module (FFM) was introduced in the encoder part to enhance the local connections between adjacent blocks in the Transformer and preserve the spatial location relationships within the images themselves. Then, a Double Channel Attention (DCA) module was introduced in the decoder part to fuse the channel information of multi-level features, enhancing the model's sensitivity to key information across channels. Finally, the model's constraints on the segmentation results were strengthened by combining cross-entropy loss and Dice loss. Experiments on the public Synapse and ACDC datasets show that MFUNet achieves Dice Similarity Coefficients (DSC) of 81.06% and 90.91%, respectively. Compared with the baseline model TransUNet, MFUNet reduces the Hausdorff Distance (HD) by 11.5% on the Synapse dataset, and improves the segmentation accuracy for the right ventricle and myocardium on the ACDC dataset by 1.43 and 3.48 percentage points, respectively. The experimental results show that MFUNet achieves better segmentation in both internal filling and edge prediction of medical images, which can help improve doctors' diagnostic efficiency in clinical practice.
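The combined cross-entropy and Dice loss mentioned above is a standard formulation in medical image segmentation; a minimal NumPy sketch is given below. The equal weighting `alpha = 0.5` and the one-hot input layout are assumptions for illustration, as the abstract does not state the paper's exact weighting or implementation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss. pred: softmax probabilities (N, C, H, W);
    target: one-hot labels of the same shape."""
    inter = np.sum(pred * target, axis=(0, 2, 3))
    denom = np.sum(pred, axis=(0, 2, 3)) + np.sum(target, axis=(0, 2, 3))
    dice = (2.0 * inter + eps) / (denom + eps)   # per-class Dice coefficient
    return 1.0 - dice.mean()                     # 0 for a perfect prediction

def cross_entropy_loss(pred, target, eps=1e-6):
    """Pixel-wise cross-entropy averaged over the batch."""
    return -np.mean(np.sum(target * np.log(pred + eps), axis=1))

def combined_loss(pred, target, alpha=0.5):
    """Weighted sum of cross-entropy and Dice loss.
    alpha is an assumed weight; the paper's exact value is not given in the abstract."""
    return alpha * cross_entropy_loss(pred, target) + (1.0 - alpha) * dice_loss(pred, target)
```

Cross-entropy penalizes per-pixel misclassification (internal filling), while Dice loss directly optimizes region overlap and is less sensitive to class imbalance, which is why the two are commonly combined for organ segmentation.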